Assessing Solution Quality in Stochastic Programs
Determining whether a solution is of high quality (optimal or near-optimal) is a fundamental question in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap, and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires solving, say, 30 optimization problems, and then we present a result that justifies a computationally simplified single-replication procedure requiring the solution of only one optimization problem. Even though the single-replication procedure is significantly less computationally demanding, the resulting confidence interval can have low coverage probability at small sample sizes for some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and provide preliminary guidelines for selecting a candidate solution.
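The multiple-replications procedure (MRP) mentioned above can be sketched on a toy newsvendor instance. Everything below is an illustrative assumption, not taken from the paper: the exponential demand distribution, the cost/revenue parameters, and the candidate solution `x_hat`. Each replication draws a fresh sample, evaluates the candidate against the sample-average-approximation (SAA) optimum on that sample, and the replicated gap estimates yield a one-sided confidence interval on the optimality gap.

```python
import numpy as np
from scipy.stats import t

def saa_cost(x, demands, c=1.0, r=3.0):
    """Sample-average cost of ordering x units: purchase cost minus sales revenue."""
    return c * x - r * np.minimum(x, demands).mean()

def gap_estimate(x_hat, N, rng, mean_demand=10.0):
    """One replication: gap between x_hat and the SAA optimum on N fresh demand draws."""
    d = rng.exponential(mean_demand, N)
    # The SAA objective is convex piecewise linear; its minimum sits at a sample point (or 0),
    # so a search over those breakpoints recovers the exact SAA optimal value.
    candidates = np.append(d, 0.0)
    saa_opt = min(saa_cost(x, d) for x in candidates)
    return saa_cost(x_hat, d) - saa_opt          # nonnegative by construction

rng = np.random.default_rng(0)
x_hat = 11.0                                     # candidate solution, e.g. from a pilot run
n, N = 30, 500                                   # 30 replications, as in the MRP above
gaps = np.array([gap_estimate(x_hat, N, rng) for _ in range(n)])
gbar, sg = gaps.mean(), gaps.std(ddof=1)
upper = gbar + t.ppf(0.95, n - 1) * sg / np.sqrt(n)   # one-sided ~95% CI on the gap
print(f"estimated optimality gap <= {upper:.4f} with ~95% confidence")
```

The single-replication procedure the paper analyzes would keep only one such `gap_estimate` call and build the interval from the within-sample variance, which is what makes coverage delicate at small `N`.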
Assessing policy quality in multi-stage stochastic programming
Solving a multi-stage stochastic program with a large number of scenarios and a moderate-to-large number of stages can be computationally challenging. We develop two Monte Carlo-based methods that exploit special structures to generate feasible policies. To establish the quality of a given policy, we employ a Monte Carlo-based lower bound (for minimization problems) and use it to construct a confidence interval on the policy's optimality gap. The confidence interval can be formed in a number of ways, depending on how the expected solution value of the policy is estimated and combined with the lower-bound estimator. Computational results suggest that a confidence interval formed by a tree-based gap estimator may be an effective method for assessing policy quality. Variance reduction is achieved by using common random numbers in the gap estimator.
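The closing remark about common random numbers (CRN) can be illustrated with a minimal sketch, again on an assumed newsvendor-style cost (all numbers invented for illustration): estimating the cost difference between two candidate order quantities on the *same* demand draws is far less noisy than using independent draws, because the shared randomness cancels out of the difference.

```python
import numpy as np

def cost(x, d, c=1.0, r=3.0):
    """Per-scenario newsvendor cost for order quantity x and demand array d."""
    return c * x - r * np.minimum(x, d)

rng = np.random.default_rng(1)
n, N = 30, 200
gaps_crn, gaps_indep = [], []
for _ in range(n):
    d = rng.exponential(10.0, N)
    # CRN: evaluate both candidates on the same demand sample
    gaps_crn.append(np.mean(cost(12.0, d) - cost(11.0, d)))
    # Independent: second candidate gets its own sample
    d2 = rng.exponential(10.0, N)
    gaps_indep.append(np.mean(cost(12.0, d) - cost(11.0, d2)))

var_crn, var_indep = np.var(gaps_crn, ddof=1), np.var(gaps_indep, ddof=1)
print(f"variance with CRN: {var_crn:.5f}  without: {var_indep:.5f}")
```

In the paper's setting the same idea is applied inside the gap estimator, where the upper-bound (policy cost) and lower-bound estimators share scenario draws.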
Optimization Modeling for Airlift Mobility
We describe a multi-period optimization model, implemented in GAMS, to help the U.S. Air Force improve logistical efficiency. It determines the maximum on-time throughput of cargo and passengers that can be transported by a given aircraft fleet over a given network, subject to appropriate physical and policy constraints. The model can be used to help answer questions about selecting airlift assets and about investing in, or divesting, airfield infrastructure.
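The model's core idea — maximize throughput subject to fleet and airfield capacity limits — can be sketched as a toy linear program. The routes, coefficients, and capacities below are invented for illustration only; the actual GAMS model is multi-period and far richer.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: tons of cargo flown on each of 3 hypothetical routes in one period.
hours_per_ton = np.array([0.10, 0.08, 0.12])  # flying hours consumed per ton, by route
field_use     = np.array([1.0, 1.0, 0.0])     # routes 1-2 share a capacity-limited airfield
fleet_hours   = 400.0                          # total fleet flying hours available
field_cap     = 2500.0                         # tons/period the shared airfield can handle

res = linprog(
    c=-np.ones(3),                             # linprog minimizes, so negate throughput
    A_ub=np.vstack([hours_per_ton, field_use]),
    b_ub=[fleet_hours, field_cap],
    bounds=[(0, None)] * 3,
)
throughput = -res.fun
print(f"max throughput: {throughput:.1f} tons")
```

The solver saturates the shared airfield via the cheapest route in flying hours and spends the remaining hours on the unconstrained route, which is exactly the kind of trade-off (assets vs. airfield infrastructure) the abstract says the model is used to study.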
Monitoring the dynamics of Src activity in response to anti-invasive dasatinib treatment at a subcellular level using dual intravital imaging
Optimising response to tyrosine kinase inhibitors in cancer remains an active area of research. Intravital imaging is an emerging tool which can be used in drug discovery to facilitate and fine-tune maximum drug response in live tumors. A greater understanding of the intratumoural delivery and pharmacodynamics of a drug can be obtained by imaging drug target-specific fluorescence resonance energy transfer (FRET) biosensors in real time. Here, we outline our recent work using a Src-FRET biosensor as a readout of Src activity to gauge optimal tyrosine kinase inhibition in response to dasatinib treatment regimens in vivo. By simultaneously monitoring both the inhibition of Src using FRET imaging and the modulation of the surrounding extracellular matrix using second harmonic generation (SHG) imaging, we were able to show enhanced drug penetrance and delivery to live pancreatic tumors. We discuss the implications of this dual intravital imaging approach in the context of altered tumor-stromal interactions, while summarising how this approach could be applied to assess other combination strategies or tyrosine kinase inhibitors in a preclinical setting.
A stochastic program for optimizing military sealift subject to attack
We describe a stochastic program for planning the wartime sealift deployment of military cargo subject to attack. The cargo moves on ships from US or allied seaports of embarkation through seaports of debarkation (SPODs) near the theater of war, where it is unloaded and sent on to final, in-theater destinations. The question we ask is: Can a deployment-planning model, with probabilistic knowledge of the time and location of potential enemy attacks on SPODs, successfully hedge against those attacks? That is, can this knowledge be used to reduce the expected disruption caused by such attacks? A specialized multi-stage stochastic mixed-integer program is developed and answers that question in the affirmative. Furthermore, little penalty is incurred with the stochastic solution when no attack occurs, and worst-case outcomes are improved. In the short term, insight gained from the stochastic-programming approach also enables better scheduling using current rule-based methods.
Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs
Operations Research Letters, 24, pp. 47-56
Observations of X-rays and Thermal Dust Emission from the Supernova Remnant Kes 75
We present Spitzer Space Telescope and Chandra X-ray Observatory observations of the composite Galactic supernova remnant Kes 75 (G29.7-0.3). We use the detected flux at 24 microns and hot gas parameters from fitting spectra from new, deep X-ray observations to constrain models of dust emission, obtaining a dust-to-gas mass ratio M_dust/M_gas ~ 0.001. We find that a two-component thermal model, nominally representing shocked swept-up interstellar or circumstellar material and reverse-shocked ejecta, adequately fits the X-ray spectrum, albeit with somewhat high implied densities for both components. We surmise that this model implies a Wolf-Rayet progenitor for the remnant. We also present infrared flux upper limits for the central pulsar wind nebula.
Comment: 7 pages, 2 tables, 4 figures, uses emulateapj. Accepted for publication in Ap
Stellar and Planetary Properties of K2 Campaign 1 Candidates and Validation of 17 Planets, Including a Planet Receiving Earth-like Insolation
The extended Kepler mission, K2, is now providing photometry of new fields every three months in a search for transiting planets. In a recent study, Foreman-Mackey and collaborators presented a list of 36 planet candidates orbiting 31 stars in K2 Campaign 1. In this contribution, we present stellar and planetary properties for all systems. We combine ground-based seeing-limited survey data and adaptive optics imaging with an automated transit analysis scheme to validate 21 candidates as planets, 17 for the first time, and identify 6 candidates as likely false positives. Of particular interest is K2-18 (EPIC 201912552), a bright (K = 8.9) M2.8 dwarf hosting a 2.23 ± 0.25 R_Earth planet with T_eq = 272 ± 15 K and an orbital period of 33 days. We also present two new open-source software packages which enable this analysis. The first, isochrones, is a flexible tool for fitting theoretical stellar models to observational data to determine stellar properties, using a nested sampling scheme to capture the multimodal nature of the posterior distributions of the physical parameters of stars that may plausibly be evolved. The second is vespa, a new general-purpose procedure to calculate false positive probabilities and statistically validate transiting exoplanets.
Comment: 17 pages, 5 figures, 5 tables, accepted for publication in the Astrophysical Journal. Updated to closely reflect published version in ApJ (2015, 809, 25)